This article investigates how circuit-based hybrid quantum convolutional neural networks (QCNNs) can be successfully employed as image classifiers in the context of remote sensing. The hybrid QCNNs enrich the classical architecture of CNNs by introducing a quantum layer within a standard neural network. The novel QCNN proposed in this work is applied to Land Use and Land Cover (LULC) classification, chosen as an Earth Observation (EO) use case, and tested on the EuroSAT dataset, used as a reference benchmark. The results of the multiclass classification prove the effectiveness of the proposed approach by demonstrating that the QCNN performance is higher than that of its classical counterpart. Moreover, the study of different quantum circuits shows that the ones exploiting quantum entanglement achieve the best classification scores. This study underlines the potential of applying quantum computing to an EO case study and provides the theoretical and experimental background for future investigations.
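To make the hybrid layout concrete, below is a minimal sketch of a circuit-based hybrid QCNN, assuming PennyLane with a PyTorch interface; the qubit count, circuit template, and classical layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of a hybrid quantum-classical classifier, assuming
# PennyLane + PyTorch; sizes and the circuit template are illustrative.
import pennylane as qml
import torch
import torch.nn as nn

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode classical features as rotation angles.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Entangling layers: the abstract reports that circuits exploiting
    # entanglement achieve the best classification scores.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits)}  # 2 entangling layers

class HybridQCNN(nn.Module):
    """Classical CNN feature extractor followed by a quantum layer."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.to_qubits = nn.Linear(32, n_qubits)
        self.qlayer = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)
        self.head = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        z = torch.tanh(self.to_qubits(z))  # keep encoding angles bounded
        return self.head(self.qlayer(z))

model = HybridQCNN(n_classes=10)            # EuroSAT has 10 LULC classes
out = model(torch.randn(8, 3, 64, 64))      # EuroSAT patches are 64x64 RGB
```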
Sunquakes are seismic emissions visible on the solar surface, associated with some solar flares. Although discovered in 1998, they have only recently become a more commonly detected phenomenon. Despite the availability of several manual detection guidelines, to our knowledge, the astrophysical data produced for sunquakes is new to the field of Machine Learning. Detecting sunquakes is a daunting task for human operators, and this work aims to ease and, if possible, improve their detection. Thus, we introduce a dataset constructed from acoustic egression power maps of solar active regions obtained for Solar Cycles 23 and 24 using the holography method. We then present a pedagogical approach to the application of machine learning representation methods for sunquake detection using AutoEncoders, Contrastive Learning, Object Detection and recurrent techniques, which we enhance by introducing several custom domain-specific data augmentation transformations. We address the main challenges of the automated sunquake detection task, namely the very high noise patterns in and outside the active region shadow and the extreme class imbalance given by the limited number of frames that present sunquake signatures. With our trained models, we find temporal and spatial locations of peculiar acoustic emission and qualitatively associate them with eruptive and high-energy emission. While noting that these models are still in a prototype stage and there is much room for improvement in metrics and bias levels, we hypothesize that their agreement on example use cases has the potential to enable detection of weak solar acoustic manifestations.
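As one hedged illustration of label-preserving augmentation for this kind of data (the custom domain-specific transformations are not spelled out in the abstract), a generic pipeline for egression power map frame cubes might look as follows, assuming (T, H, W) float arrays.

```python
# A generic augmentation sketch for egression power map frame cubes; these
# transforms are illustrations, not the paper's custom domain-specific ones.
import numpy as np

rng = np.random.default_rng(0)

def augment(cube: np.ndarray) -> np.ndarray:
    """Apply label-preserving spatial and noise transforms to a (T, H, W) cube."""
    out = cube.copy()
    if rng.random() < 0.5:                      # horizontal flip
        out = out[:, :, ::-1]
    if rng.random() < 0.5:                      # vertical flip
        out = out[:, ::-1, :]
    k = int(rng.integers(0, 4))                 # 90-degree rotations
    out = np.rot90(out, k=k, axes=(1, 2))
    # Additive noise scaled to the frame statistics, so the already very
    # noisy background is perturbed but not overwhelmed.
    sigma = 0.05 * out.std()
    out = out + rng.normal(0.0, sigma, size=out.shape)
    return out.astype(cube.dtype)
```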
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs). Quantization is a technique for making neural networks more efficient by running them using low-bit integer arithmetic and is therefore commonly adopted in industry. Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization, and certification of the quantized representation is necessary to guarantee robustness. In this work, we present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs. Inspired by advances in robust learning of non-quantized networks, our training algorithm computes the gradient of an abstract representation of the actual network. Unlike existing approaches, our method can handle the discrete semantics of QNNs. Based on QA-IBP, we also develop a complete verification procedure for verifying the adversarial robustness of QNNs, which is guaranteed to terminate and produce a correct answer. Compared to existing approaches, the key advantage of our verification procedure is that it runs entirely on GPU or other accelerator devices. We demonstrate experimentally that our approach significantly outperforms existing methods and establish the new state-of-the-art for training and certifying the robustness of QNNs.
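To make the bound-propagation idea concrete, here is a minimal numpy sketch of interval bound propagation through one affine layer, with a floor/ceil rounding step standing in for fixed-point quantization; QA-IBP's exact discrete semantics are more involved, so treat this only as the core mechanism.

```python
# Interval bound propagation through one affine layer plus a sound
# over-approximation of round-to-nearest quantization (a simplified stand-in
# for QA-IBP's discrete semantics).
import numpy as np

def affine_bounds(W, b, lo, up):
    """Propagate the box [lo, up] through x -> W @ x + b."""
    mu, r = (lo + up) / 2.0, (up - lo) / 2.0
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r          # interval radius grows with |W|
    return mu_out - r_out, mu_out + r_out

def quantize_bounds(lo, up, scale=0.1, qmin=-128, qmax=127):
    """Floor/ceil give sound (slightly loose) bounds on round-to-nearest
    fixed-point quantization of any value inside [lo, up]."""
    q_lo = np.clip(np.floor(lo / scale), qmin, qmax) * scale
    q_up = np.clip(np.ceil(up / scale), qmin, qmax) * scale
    return q_lo, q_up

W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.zeros(2)
lo, up = np.array([0.0, 0.0]), np.array([0.1, 0.1])  # eps-ball around an input
lo, up = affine_bounds(W, b, lo, up)
lo, up = quantize_bounds(lo, up)
lo, up = np.maximum(lo, 0), np.maximum(up, 0)        # ReLU is monotone
print(lo, up)                                        # certified output bounds
```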
Chronic pain is a multi-dimensional experience, and pain intensity plays an important part in it, impacting the patient's emotional balance, psychology, and behaviour. Standard self-reporting tools, such as the Visual Analogue Scale for pain, fail to capture this burden. Moreover, this type of tool is susceptible to a degree of subjectivity, depending on the patient's clear understanding of how to use it, social biases, and their ability to translate a complex experience into a scale. To overcome these and other self-reporting challenges, pain intensity estimation has previously been studied based on facial expressions, electroencephalograms, brain imaging, and autonomic features. However, to the best of our knowledge, this estimation has never been based on patients' narratives of their personal experience of chronic pain, which is what we propose in this work. Indeed, in the clinical assessment and management of chronic pain, verbal communication is essential to convey information to physicians that would not otherwise be easily accessible through standard reporting tools, since language, sociocultural, and psychosocial variables are intertwined. We show that language features from patient narratives indeed convey information relevant for pain intensity estimation, and that our computational models can take advantage of that. Specifically, our results show that patients with mild pain focus more on the use of verbs, whilst moderate and severe pain patients focus on adverbs, and nouns and adjectives, respectively, and that these differences allow for the distinction between these three pain classes.
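As a hedged sketch of the kind of language feature the last sentence refers to, the part-of-speech ratios of a narrative can be computed as below with NLTK; the paper's actual feature set is richer, and the example sentence is invented.

```python
# Part-of-speech ratio features from a pain narrative, using NLTK's Penn
# Treebank tags (VB* verbs, RB* adverbs, NN* nouns, JJ* adjectives).
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

POS_GROUPS = {"verb": "VB", "adverb": "RB", "noun": "NN", "adjective": "JJ"}

def pos_ratios(text: str) -> dict:
    """Fraction of tokens per coarse POS class; mild-pain narratives are
    reported to lean on verbs, severe ones on nouns and adjectives."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    total = max(len(tags), 1)
    return {name: sum(t.startswith(prefix) for t in tags) / total
            for name, prefix in POS_GROUPS.items()}

print(pos_ratios("The burning pain spreads and throbs whenever I move."))
```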
Specular microscopy assessment of the human corneal endothelium (CE) in Fuchs' dystrophy is challenging due to the presence of dark image regions called guttae. This paper proposes a UNet-based segmentation approach that requires minimal post-processing and achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy. We cast the segmentation problem as a regression task of the cell and gutta signed distance maps instead of a pixel-level classification task as typically done with UNets. Compared to the conventional UNet classification approach, the distance-map regression approach converges faster in clinically relevant parameters. It also produces morphometric parameters that agree with the manually-segmented ground-truth data, namely the average cell density difference of -41.9 cells/mm² (95% confidence interval (CI) [-306.2, 222.5]) and the average difference of mean cell area of 14.8 μm² (95% CI [-41.9, 71.5]). These results suggest a promising alternative for CE assessment.
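A minimal sketch of the regression target, assuming scipy: the signed distance map of a binary mask, positive inside and negative outside the object; any normalization or truncation used in the paper may differ.

```python
# Build a signed distance map regression target from a binary cell mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Positive inside the object, negative outside."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to background
    outside = distance_transform_edt(~mask)  # distance to foreground
    return inside - outside

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True           # a toy "cell"
sdm = signed_distance_map(mask)     # the UNet regresses this map
```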
Convolutional neural networks (CNNs) are among the most popular models of artificial neural networks (ANNs) in computer vision (CV). Researchers have developed various CNN-based structures to solve problems such as image classification, object detection, and image similarity measurement. Although CNNs have shown their value in most cases, they still have a downside: they easily overfit when there are not enough samples in the dataset. Most medical image datasets are examples of such datasets. Additionally, many datasets contain both designed features and images, but CNNs can only deal with images directly, a missed opportunity to leverage the additional information. For this reason, we propose a new structure of CNN-based model: CompNet, a composite convolutional neural network. This is a specially designed neural network that accepts combinations of images and designed features as input in order to utilize all available information. The novelty of this structure is that it uses features learned from the images to weight the designed features, so that it obtains all the information from both the images and the designed features. With this structure used on classification tasks, the results indicate that our approach has the capability to significantly reduce overfitting. Furthermore, we also found several similar approaches proposed by other researchers that can combine images and designed features. For comparison, we first applied those similar approaches on LIDC and compared the results with the CompNet results; we then applied CompNet to the datasets those approaches originally used in their works and compared the results with the ones presented in their papers. All these comparisons showed that our model outperforms those similar approaches on classification tasks, both on the LIDC dataset and on their proposed datasets.
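A hedged PyTorch sketch of the described idea, where features learned from the image produce per-feature weights for the designed features; the layer sizes and the sigmoid gating are assumptions for illustration, not CompNet's published details.

```python
# Image-conditioned weighting of designed features, then joint classification.
import torch
import torch.nn as nn

class CompositeNet(nn.Module):
    def __init__(self, n_designed: int, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Image features -> one weight per designed feature.
        self.gate = nn.Sequential(nn.Linear(32, n_designed), nn.Sigmoid())
        self.head = nn.Linear(32 + n_designed, n_classes)

    def forward(self, image, designed):
        img_feat = self.cnn(image)
        weighted = designed * self.gate(img_feat)   # image-conditioned weights
        return self.head(torch.cat([img_feat, weighted], dim=1))

model = CompositeNet(n_designed=8, n_classes=2)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 8))
```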
Lesion segmentation is a crucial step of the radiomics workflow. Manual segmentation requires long execution times and is prone to variability, impairing the realization of radiomics studies and their robustness. In this study, a deep-learning automatic segmentation method was applied to computed tomography images of non-small-cell lung cancer (NSCLC) patients. The use of manual vs automatic segmentation in the performance of survival radiomics models was also assessed. Methods: A total of 899 NSCLC patients were included (2 proprietary datasets: A and B; 1 public dataset: C). Automatic segmentation of lung lesions was performed by training a previously developed architecture, nnU-Net, including 2D, 3D and cascade approaches. The quality of the automatic segmentation was evaluated with the Dice coefficient, using the manual contours as reference. The impact of automatic segmentation on the performance of radiomics models for patient survival was explored by extracting hand-crafted and deep-learning radiomics features from the manual and automatic contours of dataset A. The accuracy of the models was assessed and compared. Results: The best agreement between automatic and manual contours (Dice = 0.78 ± 0.12) was achieved by averaging the predictions of the 2D and 3D models and applying a post-processing technique to extract the maximum connected component. No statistical differences were observed in the performance of the survival models when using manual or automatic contours, hand-crafted or deep features. The best classifiers showed accuracies between 0.65 and 0.78. Conclusion: The promising role of nnU-Net for the automatic segmentation of lung lesions was confirmed, dramatically reducing the time-consuming physician workload without impairing the accuracy of radiomics-based survival prediction models.
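A minimal sketch of the post-processing that gave the best agreement, assuming scipy: average the 2D and 3D model probabilities, threshold, and keep the largest connected component; nnU-Net's own pipeline implements these steps with more care.

```python
# Ensemble 2D/3D probabilities, then keep the largest connected component.
import numpy as np
from scipy import ndimage

def ensemble_and_clean(prob_2d, prob_3d, threshold=0.5):
    """prob_2d, prob_3d: per-voxel lesion probabilities of equal shape."""
    mask = (prob_2d + prob_3d) / 2.0 > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)   # largest connected component

seg = ensemble_and_clean(np.random.rand(32, 64, 64), np.random.rand(32, 64, 64))
```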
In this work, we study high-probability bounds for stochastic subgradient methods under heavy-tailed noise. In this setting, the noise is only assumed to have finite variance, as opposed to the sub-Gaussian distributions for which standard subgradient methods are known to enjoy high-probability bounds. We analyze a clipped version of the projected stochastic subgradient method, where subgradient estimates are truncated whenever they have large norms. We show that this clipping strategy leads to both any-time and finite-horizon bounds for many classical averaging schemes. Preliminary experimental results are shown to support the validity of the method.
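A minimal numpy sketch of the clipped projected subgradient iteration on a toy problem with heavy-tailed but finite-variance noise (Student-t with 3 degrees of freedom); the clipping threshold, step sizes, and projection set here are illustrative choices, not the ones from the analysis.

```python
# Clipped projected stochastic subgradient method on a toy problem.
import numpy as np

def clip(g, tau):
    """Truncate the subgradient estimate whenever its norm exceeds tau."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

def project_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

rng = np.random.default_rng(0)
x = np.zeros(10)
for k in range(1, 1001):
    # Subgradient of f(x) = ||x - 1||_1 plus heavy-tailed noise with
    # finite variance (Student-t, df=3).
    g = np.sign(x - 1.0) + rng.standard_t(df=3, size=x.shape)
    x = project_ball(x - (0.1 / np.sqrt(k)) * clip(g, tau=2.0), radius=5.0)
```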
Image captioning is a current research task aimed at describing image content using the objects in the scene and their relationships. To tackle this task, two important research areas are combined: artificial vision and natural language processing. In image captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or badly) a method performs. In recent years, it has been observed that classical metrics based on n-grams are insufficient to capture the semantics and the critical meaning needed to describe the content of an image. To assess the pertinence of the current set of metrics, in this manuscript we present an evaluation of several image captioning metrics and a comparison between them, using the well-known COCO dataset. To this end, we designed two scenarios: 1) a set of artificially built captions, and 2) a comparison of some state-of-the-art image captioning methods. We tried to answer the following questions: Do the current metrics help to produce high-quality captions? How do the actual metrics compare to each other? What do the metrics really measure?
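A small worked illustration of the n-gram insensitivity the manuscript examines, using NLTK's BLEU as a stand-in metric; the captions are invented, and the manuscript evaluates a broader set of metrics on COCO.

```python
# Why n-gram metrics can miss semantics: a paraphrase scores low while a
# caption with one critical word changed scores high.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["a", "man", "rides", "a", "horse", "on", "the", "beach"]]
paraphrase = ["a", "person", "riding", "a", "horse", "along", "the", "shore"]
wrong_word = ["a", "man", "rides", "a", "horse", "on", "the", "table"]

smooth = SmoothingFunction().method1
# The meaning-preserving paraphrase shares few n-grams and scores low...
print(sentence_bleu(reference, paraphrase, smoothing_function=smooth))
# ...while the semantically wrong caption shares most n-grams and scores high.
print(sentence_bleu(reference, wrong_word, smoothing_function=smooth))
```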
Deep-learning-based solutions are being successfully implemented for a wide variety of applications. Most notably, clinical use cases have seen increased interest and have been the main driver behind some of the cutting-edge data-driven algorithms proposed in recent years. For applications such as sparse-view reconstruction, where the amount of measured data is kept small so that acquisition times are short and the radiation dose is low, the reduction of streak artifacts has prompted the development of data-driven denoising algorithms, whose main goal is to obtain diagnostically viable images from only a subset of the full-scan data. We propose WNet, a data-driven dual-domain denoising model that contains a trainable reconstruction layer for sparse-view denoising. Two encoder-decoder networks perform the denoising in the sinogram and reconstruction domains simultaneously, while a third layer implementing the filtered back-projection algorithm is sandwiched between the first two and takes care of the reconstruction operation. We investigate the performance of the network on sparse-view chest CT scans and highlight the added benefit of having a trainable reconstruction layer over the more conventional fixed one. We train and test our network on two clinically relevant datasets, and compare the obtained results with three different types of sparse-view CT denoising and reconstruction algorithms.
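A hedged PyTorch sketch of the dual-domain layout described above: a sinogram-domain denoiser, a reconstruction layer applying a trainable frequency filter followed by a fixed backprojection operator (assumed given), and an image-domain denoiser; WNet's actual parameterization of the reconstruction layer may differ.

```python
# Dual-domain sketch: sinogram denoiser -> trainable-filter FBP -> image denoiser.
import torch
import torch.nn as nn

class TrainableFBP(nn.Module):
    def __init__(self, n_detectors: int, backproject: nn.Module):
        super().__init__()
        # Initialize with the classical ramp filter; training can adapt it.
        freqs = torch.fft.fftfreq(n_detectors).abs()
        self.filter = nn.Parameter(freqs.clone())
        self.backproject = backproject  # fixed linear backprojection, assumed given

    def forward(self, sino):            # sino: (B, angles, detectors)
        f = torch.fft.fft(sino, dim=-1)
        filtered = torch.fft.ifft(f * self.filter, dim=-1).real
        return self.backproject(filtered)

def conv_block(ch):                     # stand-in for each encoder-decoder net
    return nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(ch, 1, 3, padding=1))

class WNetSketch(nn.Module):
    def __init__(self, n_detectors, backproject):
        super().__init__()
        self.sino_net = conv_block(16)   # sinogram-domain denoiser
        self.fbp = TrainableFBP(n_detectors, backproject)
        self.img_net = conv_block(16)    # image-domain denoiser

    def forward(self, sino):
        s = self.sino_net(sino.unsqueeze(1)).squeeze(1)
        img = self.fbp(s)
        return self.img_net(img.unsqueeze(1)).squeeze(1)
```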